Note: This page's design, presentation and content have been created and enhanced using Claude (Anthropic's AI assistant) to improve visual quality and educational experience.
Week 5 • Sub-Lesson 6

🧪 Hands-On: Activities & Assessment

Putting it all together — comparative searches, citation verification, literature mapping, and your weekly assessment

What We'll Cover

This final session of Week 5 is entirely practical. You will work through three activities that build on everything we have covered this week, and we will introduce the weekly assessment. The activities are designed to give you hands-on experience with the tools, confront you with real hallucinated citations, and help you build the verification habits that will serve you throughout your research career.

πŸ” Activity 1: The Comparative Search Challenge

Objective

Run the same research question through four different tools and compare the results. This exercise makes the differences between tools concrete — you will see first-hand how different tools surface different papers, miss different things, and present information in different ways.

Setup

  1. Choose a research question from your own work.
  2. Run the same question through Semantic Scholar — use a natural language query (a scriptable version of this step is sketched after this list).
  3. Run the same question through Elicit (free tier) — note how it interprets and restructures your question.
  4. Run the same question through Consensus (free tier) — pay attention to how it synthesises across papers.
  5. Run the same question through a traditional database relevant to your field (Google Scholar, PubMed, Web of Science, etc.) — use keyword search as you normally would.
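If you prefer to run the Semantic Scholar step programmatically, the sketch below uses its public Graph API. This is a minimal illustration, not the required workflow for the activity: the endpoint and field names match the API documented at api.semanticscholar.org at the time of writing, and the example query is hypothetical — substitute your own research question.

```python
# Minimal sketch: query Semantic Scholar's public Graph API with a
# natural-language research question and print basic metadata.
import requests

def search_semantic_scholar(question: str, limit: int = 10) -> list[dict]:
    """Return a list of paper records for a natural-language query."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": question,
            "limit": limit,
            "fields": "title,year,venue,externalIds,authors",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# Hypothetical example query -- replace with your own research question.
for paper in search_semantic_scholar("effects of sleep deprivation on working memory"):
    authors = ", ".join(a["name"] for a in paper.get("authors", [])[:3])
    print(f"{paper.get('year')}  {paper['title']}  ({authors})")
```

A side benefit of scripting this step: you get a reproducible record of exactly what you searched and when, which is useful when you document your process for the weekly assessment.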

What to Record (use the template provided)

  • How many results did each tool return?
  • How many results from each tool were genuinely relevant to your question?
  • Did any tool surface papers that none of the others found? Which ones?
  • Were there papers that appeared in the traditional database but NOT in any AI tool?
  • How did the AI tools' summaries compare to reading the actual abstracts?
  • Did any tool return results that seemed hallucinated or could not be verified?

💬 Discussion Prompts

Which tool gave you the best starting point? Think about not just the number of results, but how quickly you could identify the most relevant papers.

Which tool surprised you (positively or negatively)? Was there a tool you expected to perform well that did not, or vice versa?

Would keyword search alone have found the same papers? Consider whether the AI-powered tools added genuine value beyond what careful keyword searching could achieve.

👻 Activity 2: The Citation Verification Exercise

Objective

Ask a general-purpose AI to generate references, then verify every single one. This is the exercise that makes the hallucinated citation problem real and personal — because you will almost certainly find fabricated references in your own results.

Setup

  1. Open ChatGPT, Claude, or Gemini (whichever you normally use for your research work).
  2. Ask: "Please provide 10 academic references on [your research topic]. Include the authors, year, title, journal, and DOI for each."
  3. For every single reference the AI provides, work through the verification checklist below. Do not skip any — the point is to verify all 10.

Verification Checklist: For each of the 10 references, work through these four checks:

1. Existence Check

  • Search the exact title (in quotes) on Google Scholar
  • Search on Semantic Scholar
  • If you cannot find it in either, it probably does not exist
  • Record: EXISTS / DOES NOT EXIST

2. Author Check

  • If the paper exists, are the authors the ones listed by the AI?
  • Check the authors' publication pages or institutional profiles
  • Record: CORRECT / INCORRECT / PARTIALLY CORRECT

3. Details Check

  • Is the year correct?
  • Is the journal or venue correct?
  • Is the DOI valid? (check at doi.org; a scriptable version of this check is sketched below)
  • Record: Findings for each detail
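The DOI check in particular is easy to script. Below is a minimal sketch, assuming standard HTTP resolution at doi.org: a registered DOI answers with a redirect to the publisher's page, while an invented one typically returns 404. Treat a failed lookup as a prompt to check manually rather than proof of fabrication, and note that the second DOI in the example is deliberately made up.

```python
# Hedged sketch: does a DOI resolve at doi.org?
# Registered DOIs answer with a 3xx redirect to the publisher's page;
# fabricated ones typically return HTTP 404.
import requests

def doi_resolves(doi: str) -> bool:
    resp = requests.head(
        f"https://doi.org/{doi.strip()}",
        allow_redirects=False,
        timeout=30,
    )
    return 300 <= resp.status_code < 400

print(doi_resolves("10.1038/nature14539"))    # real DOI (a 2015 Nature paper) -> True
print(doi_resolves("10.9999/invented.12345")) # made-up DOI -> expect False
```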

4. Content Check

  • Read the abstract of any paper that does exist
  • Does the paper actually discuss what the AI said it discusses?
  • Record: ACCURATE DESCRIPTION / MISLEADING / WRONG

📋 What to Submit

A verification table documenting your findings for all 10 references. Calculate your "hallucination rate" — what percentage of the AI's citations were fully or partially fabricated? (A worked example of the calculation follows below.) Include this table as an appendix to your weekly assessment.
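As a worked example of the calculation (the tallies below are hypothetical, not targets):

```python
# Hypothetical tallies from a verification table of 10 references.
tallies = {
    "exists, accurately described": 4,
    "exists, wrong details or content": 2,  # counts as partially fabricated
    "does not exist": 4,                    # fully fabricated
}
fabricated = tallies["exists, wrong details or content"] + tallies["does not exist"]
rate = 100 * fabricated / sum(tallies.values())
print(f"Hallucination rate: {rate:.0f}%")  # -> Hallucination rate: 60%
```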

πŸ—ΊοΈ Activity 3: Literature Map Construction

Objective

Build a visual map of the literature for your research project. This activity moves from individual paper discovery to seeing the bigger picture — how papers in your field relate to each other, where the clusters are, and where the gaps might be.

Setup

  1. Choose 1-3 seed papers that are central to your research. These should be papers you know well and consider foundational to your work.
  2. Use Connected Papers OR ResearchRabbit to build a visual map from your seed papers (a rough programmatic equivalent is sketched after this list).
  3. Explore the map: identify clusters, influential papers (large nodes or many connections), and recent additions at the edges.
  4. Screenshot your map for your portfolio — you will want to reference this later.
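Connected Papers and ResearchRabbit are point-and-click tools, so no code is needed for this activity. If you are curious what sits underneath such maps, though, the sketch below builds a rough one-hop citation map around a seed paper using Semantic Scholar's Graph API and the networkx library. It is an illustration under assumptions (the documented /references endpoint and an illustrative seed DOI), not a reimplementation of either tool; Connected Papers, for instance, uses richer similarity signals such as co-citation rather than direct reference links.

```python
# Hedged sketch: a one-hop "literature map" around a seed paper,
# built from the seed's reference list via Semantic Scholar's Graph API.
import requests
import networkx as nx

API = "https://api.semanticscholar.org/graph/v1/paper"

def reference_map(seed_doi: str, limit: int = 50) -> nx.DiGraph:
    """Return a directed graph with edges from the seed to each cited paper."""
    resp = requests.get(
        f"{API}/DOI:{seed_doi}/references",
        params={"fields": "title,year", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    graph = nx.DiGraph()
    graph.add_node(seed_doi, title="SEED")
    for item in resp.json().get("data", []):
        ref = item.get("citedPaper") or {}
        if ref.get("paperId"):
            graph.add_node(ref["paperId"], title=ref.get("title"), year=ref.get("year"))
            graph.add_edge(seed_doi, ref["paperId"])
    return graph

g = reference_map("10.1038/nature14539")  # illustrative seed DOI
print(f"{g.number_of_nodes()} nodes, {g.number_of_edges()} edges")
```

Extending this to two hops and looking for densely interconnected regions is, in spirit, how the visual tools find the clusters you will be asked to identify below.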

What to Look For

  • Distinct clusters or sub-communities — are there identifiable groups of papers that cite each other heavily but have fewer connections to other groups? These often represent different methodological approaches or theoretical traditions.
  • Central papers — which papers appear most connected? These are often seminal works or influential reviews that you should be familiar with.
  • Recent papers on the edges — papers at the periphery of the map, especially recent ones, may represent new directions or emerging sub-fields worth investigating.
  • Surprising connections — are there links between papers you would not have expected? These unexpected connections can lead to novel insights or interdisciplinary approaches.

🎯 Bonus Challenge

If you used Connected Papers, try the same seed paper in ResearchRabbit (or vice versa). Do they surface the same related papers? Where do they differ? The differences tell you something important about how each tool defines "relatedness" — and why using multiple tools gives you a more complete picture than relying on any single one.

πŸ“ Weekly Assessment

AI-Assisted Mini Literature Review (2000 words)

This week's assessment asks you to put everything together: use AI tools to discover, verify, and synthesise literature on a topic related to your research area. The assessment is designed to test not just your ability to use the tools, but your ability to use them critically and transparently.

Requirements

  1. Choose a topic related to your research area. It should be specific enough to be manageable in 2000 words but broad enough to find at least 10 sources.
  2. Use at least 2 AI tools from this week (document which ones and how you used them). Be specific — do not just say "I used Semantic Scholar". Describe your queries, what you found, and how you refined your search.
  3. Write a 2000-word mini literature review that synthesises at least 10 verified sources. This should be a genuine synthesis — not just summaries of individual papers listed one after another, but a coherent narrative that connects ideas across your sources.
  4. Include a Reliability Audit section (minimum 300 words of the 2000) documenting:
    • At least 3 instances where you verified or corrected AI-generated claims
    • Your citation verification process and results
    • What the AI tools did well and where they fell short
    • Which tools you found most and least useful, and why
  5. Include a brief Ethical Reflection (connecting to Week 4's frameworks) on:
    • Transparency: How would you disclose your AI use in a real paper?
    • What ethical considerations arose during your AI-assisted literature search?
    • How much did you rely on your own critical thinking during this process?

Assessment Criteria

Quality of Literature Review 

  • Coherent synthesis (not just summaries of individual papers)
  • Appropriate scope and depth for a 2000-word review
  • Critical engagement with the literature — not just reporting what papers say, but evaluating and connecting them
  • All sources verified and correctly cited

Reliability Audit 

  • Thoroughness of verification documentation
  • Honest reporting of AI tool successes AND failures
  • Quality of reflection on tool strengths and limitations
  • Evidence of genuine critical engagement with AI outputs

Tool Use & Workflow

  • Evidence of using multiple tools strategically
  • Documentation of search process and decision-making
  • Appropriate tool selection for different tasks
  • Clear description of how tools complemented each other

Ethical Reflection

  • Meaningful engagement with ethical frameworks from Week 4
  • Practical application of ethical reasoning to AI-assisted research
  • Thoughtful consideration of transparency and disclosure

📤 Submission

Upload to Amathuba by the deadline indicated on the activity. Include your verification table from Activity 2 as an appendix. The appendix does not count towards the 2000-word limit.

Week 5 Summary & Key Takeaways

  • The AI literature tool landscape includes three categories: citation-based discovery, semantic search and synthesis, and grounded chat (RAG). Each category serves a different purpose in the research workflow.
  • Free tools (Semantic Scholar, Connected Papers, ResearchRabbit, NotebookLM, Google Scholar) can cover most of a postgrad researcher's needs. You do not need to pay for tools to do effective AI-assisted literature work.
  • Paid tools (Elicit, Consensus, Scite, SciSpace, Litmaps) add specific capabilities that are worth it in specific situations — do not pay for breadth, pay for the feature you need.
  • The hallucinated citation crisis is real and serious — general-purpose LLMs can fabricate a large percentage of citations, and this has already contaminated published research at the highest levels.
  • Verification is not optional β€” build the Five-Point Citation Check into your workflow. Every citation generated by an AI tool should be verified before it enters your work.
  • Claude (and other AI tools) become dramatically more useful when you give them structured prompts, clear constraints, and verified source documents rather than asking open-ended questions.
  • The most effective workflow combines multiple tools: AI for discovery and synthesis, traditional databases for verification, and always, always checking your citations.

Looking ahead: Next week, we move from finding literature to producing it — AI for Writing and Communication. We will explore how AI tools can help with drafting, structuring, editing, and translating academic writing, while navigating the complex questions of authorship and originality that arise when AI contributes to the writing process.